20 research outputs found

    Learning Convex Partitions and Computing Game-theoretic Equilibria from Best Response Queries

    Full text link
    Suppose that an $m$-simplex is partitioned into $n$ convex regions having disjoint interiors and distinct labels, and we may learn the label of any point by querying it. The learning objective is to know, for any point in the simplex, a label that occurs within some distance $\epsilon$ from that point. We present two algorithms for this task: Constant-Dimension Generalised Binary Search (CD-GBS), which for constant $m$ uses $poly(n, \log(\frac{1}{\epsilon}))$ queries, and Constant-Region Generalised Binary Search (CR-GBS), which uses CD-GBS as a subroutine and for constant $n$ uses $poly(m, \log(\frac{1}{\epsilon}))$ queries. We show via Kakutani's fixed-point theorem that these algorithms provide bounds on the best-response query complexity of computing approximate well-supported equilibria of bimatrix games in which one of the players has a constant number of pure strategies. We also partially extend our results to games with multiple players, establishing further query complexity bounds for computing approximate well-supported equilibria in this setting. Comment: 38 pages, 7 figures; the second version strengthens the lower bound in Theorem 6, adds footnotes with additional comments, and fixes a typo
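
    The label-query model can be illustrated with a minimal sketch (this is not the paper's CD-GBS or CR-GBS): in the one-dimensional case the simplex is the unit interval, the $n$ convex regions are subintervals, and binary search locates each labelled boundary to precision $\epsilon$ using $O(n \log(1/\epsilon))$ label queries, matching the flavour of the $poly(n, \log(\frac{1}{\epsilon}))$ bound for constant dimension. The oracle and labels below are hypothetical.

```python
# Illustrative sketch only: learning a convex (interval) partition of [0, 1]
# from label queries, to precision eps, with O(n log(1/eps)) queries.

def learn_interval_partition(query_label, eps):
    """query_label(x) returns the label of point x in [0, 1].
    Returns a list of (left, right, label) cells covering [0, 1]; every point in
    a cell lies within eps of a point carrying the cell's label."""
    cells = []
    left = 0.0
    label_left = query_label(0.0)
    while left < 1.0:
        # Binary search for the right end of the region labelled label_left.
        lo, hi = left, 1.0
        if query_label(1.0) == label_left:
            lo = 1.0  # the current region extends to the end of the interval
        while hi - lo > eps:
            mid = (lo + hi) / 2.0
            if query_label(mid) == label_left:
                lo = mid
            else:
                hi = mid
        cells.append((left, hi, label_left))
        left = hi
        if left < 1.0:
            label_left = query_label(left)
    return cells


if __name__ == "__main__":
    # Hypothetical ground-truth partition of [0, 1] into three labelled intervals.
    def oracle(x):
        return "A" if x < 0.31 else ("B" if x < 0.74 else "C")

    print(learn_interval_partition(oracle, 1e-4))
```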

    Detecting controlling nodes of Boolean regulatory networks

    Get PDF
    Boolean models of regulatory networks are assumed to be tolerant to perturbations. Qualitatively, this implies that each function can depend on only a few nodes. Biologically motivated constraints further show that the functions found in Boolean regulatory networks belong to certain classes of functions, for example, the unate functions. It turns out that these classes have specific properties in the Fourier domain. That motivates us to study the problem of detecting controlling nodes in classes of Boolean networks using spectral techniques. We consider networks with unbalanced functions and functions with an average sensitivity of less than (2/3)k, where k is the number of controlling variables of a function. Further, we consider the class of 1-low networks, which includes unate networks, linear threshold networks, and networks with nested canalyzing functions. We show that the application of spectral learning algorithms leads to both better time and sample complexity for the detection of controlling nodes compared with algorithms based on exhaustive search. For a particular algorithm, we state analytical upper bounds on the number of samples needed to find the controlling nodes of the Boolean functions. Further, improved algorithms for detecting controlling nodes in large-scale unate networks are given and numerically studied
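
    The spectral idea can be illustrated with a minimal sketch (this is not the paper's exact algorithm, and the sample counts below are illustrative rather than the paper's analytical bounds): for a unate Boolean function, the influence of variable i equals the magnitude of its first-order Fourier coefficient, so estimating those coefficients from uniform input/output samples and keeping the largest ones flags the controlling variables.

```python
# Illustrative sketch only: spectral detection of controlling variables of a
# unate Boolean function from uniform samples, via first-order Fourier coefficients.
import numpy as np

def estimate_first_order_coeffs(x, y):
    """x: (N, n) inputs over {-1, +1}; y: (N,) outputs over {-1, +1}.
    Returns the empirical coefficients f_hat({i}) = E[f(x) * x_i] for each i."""
    return (x.astype(float) * y.astype(float)[:, None]).mean(axis=0)

def detect_controlling_nodes(f, n, num_samples=4000, threshold=0.1, seed=0):
    """Samples f: {-1,+1}^n -> {-1,+1} uniformly and returns the indices whose
    estimated first-order coefficient exceeds the threshold in magnitude."""
    rng = np.random.default_rng(seed)
    x = rng.choice([-1, 1], size=(num_samples, n))
    y = np.array([f(row) for row in x])
    coeffs = estimate_first_order_coeffs(x, y)
    return [i for i in range(n) if abs(coeffs[i]) > threshold]

if __name__ == "__main__":
    # Hypothetical node update rule: a 2-out-of-3 threshold of variables 0, 2, 4
    # embedded in n = 10 inputs; the remaining variables are irrelevant.
    def node_update(x):
        return 1 if (x[0] + x[2] + x[4]) > 0 else -1

    print(detect_controlling_nodes(node_update, n=10))  # expected: [0, 2, 4]
```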

    Blockwise $p$-Tampering Attacks on Cryptographic Primitives, Extractors, and Learners

    Get PDF
    Austrin, Chung, Mahmoody, Pass and Seth (Crypto '14) studied the notion of bitwise $p$-tampering attacks over randomized algorithms, in which an efficient 'virus' gets to control each bit of the randomness with independent probability $p$ in an online way. The work of Austrin et al. showed how to break certain 'privacy primitives' (e.g., encryption, commitments, etc.) through bitwise $p$-tampering, by giving a bitwise $p$-tampering biasing attack that increases the average $E[f(U_n)]$ of any efficient function $f \colon \{0,1\}^n \to [-1,+1]$ by $\Omega(p \cdot Var[f(U_n)])$, where $Var[f(U_n)]$ is the variance of $f(U_n)$. In this work, we revisit and extend the bitwise tampering model of Austrin et al. to the blockwise setting, where blocks of randomness become tamperable with independent probability $p$. Our main result is an efficient blockwise $p$-tampering attack that biases the average $E[f(X)]$ of any efficient function $f$ mapping arbitrary $X$ to $[-1,+1]$ by $\Omega(p \cdot Var[f(X)])$, regardless of how $X$ is partitioned into individually tamperable blocks $X=(X_1,\dots,X_n)$. Relying on previous works, our main biasing attack immediately implies efficient attacks against the privacy primitives as well as seedless multi-source extractors, in a model where the attacker gets to tamper with each block (or source) of the randomness with independent probability $p$. Further, we show how to increase the classification error of deterministic learners in the so-called 'targeted poisoning' attack model under Valiant's adversarial noise. In this model, an attacker has a 'target' test data $d$ in mind and wishes to increase the error of classifying $d$, while she gets to tamper with each training example with independent probability $p$ in an online way
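
    The blockwise model and the direction of the bias can be illustrated with a small greedy attacker; this is a hedged sketch, not the paper's attack or its $\Omega(p \cdot Var[f(X)])$ analysis. With independent probability $p$ per block, the attacker compares Monte Carlo estimates of the conditional expectation of $f$ under the honest block value versus a freshly sampled candidate and keeps whichever is larger; all names and parameters below are illustrative.

```python
# Illustrative sketch only: an online blockwise tampering attacker that greedily
# biases E[f(X)] upward by Monte Carlo estimation of conditional expectations.
import random

def greedy_tamper(f, sample_block, n, p, num_continuations=200):
    """f maps a list of n blocks to a value in [-1, +1]; sample_block draws one
    honest block. Returns the (possibly tampered) list of n blocks."""

    def estimate(prefix):
        # Monte Carlo estimate of E[f(X)] conditioned on the fixed prefix.
        total = 0.0
        for _ in range(num_continuations):
            suffix = [sample_block() for _ in range(n - len(prefix))]
            total += f(prefix + suffix)
        return total / num_continuations

    blocks = []
    for _ in range(n):
        honest = sample_block()
        if random.random() < p:  # this block happens to be tamperable
            candidate = sample_block()
            if estimate(blocks + [candidate]) > estimate(blocks + [honest]):
                honest = candidate
        blocks.append(honest)
    return blocks

if __name__ == "__main__":
    # Hypothetical target: f is +1 iff the majority of 15 coin flips is heads.
    f = lambda xs: 1 if sum(xs) > len(xs) / 2 else -1
    coin = lambda: random.randint(0, 1)
    untampered = sum(f([coin() for _ in range(15)]) for _ in range(200)) / 200
    tampered = sum(f(greedy_tamper(f, coin, n=15, p=0.3)) for _ in range(200)) / 200
    print("untampered average:", untampered, "tampered average:", tampered)
```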

    A Fourier Analysis Based Attack against Physically Unclonable Functions

    Get PDF
    Electronic payment systems have leveraged the advantages offered by RFID technology, whose security is promised to be improved by applying the notion of Physically Unclonable Functions (PUFs). Along with the evolution of PUFs, numerous successful attacks against PUFs have been proposed in the literature. Among these are machine learning (ML) attacks, ranging from heuristic approaches to provable algorithms, which have attracted great attention. Our paper pursues this line of research by introducing a Fourier analysis based attack against PUFs. More specifically, this paper focuses on two main aspects of ML attacks, namely being provable and noise tolerant. In this regard, we prove that our attack integrates naturally into the provable Probably Approximately Correct (PAC) model. Moreover, we show that our attacks against known PUF families are effective and applicable even in the presence of noise. Our proof relies heavily on the intrinsic properties of these PUF families, namely the arbiter, Ring Oscillator (RO), and Bistable Ring (BR) PUF families. We believe that our new style of ML algorithms, which takes advantage of the Fourier analysis principle, can offer better measures of PUF security
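
    The provable, noise-tolerant flavour of such attacks can be illustrated with the generic "low-degree" learning idea from Fourier analysis of Boolean functions; the sketch below is not the paper's PAC attack on arbiter, RO, or BR PUFs. It estimates all Fourier coefficients of degree at most d of the challenge-response map from observed CRPs and predicts with the sign of the resulting low-degree approximation; because the coefficients are empirical averages, the estimates degrade gracefully under response noise. The toy PUF and all parameters are hypothetical.

```python
# Illustrative sketch only: a generic low-degree Fourier learner applied to
# challenge-response pairs of a toy (noisy) PUF-like function.
from itertools import combinations
import numpy as np

def low_degree_learner(challenges, responses, d):
    """challenges: (N, n) array over {-1, +1}; responses: (N,) array over {-1, +1}.
    Estimates all Fourier coefficients of degree <= d and returns a predictor
    that outputs the sign of the low-degree approximation."""
    n = challenges.shape[1]
    subsets = [s for k in range(d + 1) for s in combinations(range(n), k)]
    coeffs = {}
    for s in subsets:
        chi = np.prod(challenges[:, list(s)], axis=1) if s else np.ones(len(responses))
        coeffs[s] = float(np.mean(chi * responses))  # empirical f_hat(s)

    def predict(c):
        val = sum(coeffs[s] * (np.prod(c[list(s)]) if s else 1.0) for s in subsets)
        return 1 if val >= 0 else -1

    return predict

if __name__ == "__main__":
    # Hypothetical toy "PUF": a noisy linear threshold function over 8 challenge bits.
    rng = np.random.default_rng(1)
    w = rng.normal(size=8)
    X = rng.choice([-1, 1], size=(3000, 8))
    flips = rng.choice([1, -1], size=3000, p=[0.95, 0.05])  # 5% response noise
    y = np.where(X @ w >= 0, 1, -1) * flips
    predict = low_degree_learner(X[:2500], y[:2500], d=1)
    accuracy = np.mean([predict(x) == yi for x, yi in zip(X[2500:], y[2500:])])
    print("held-out accuracy:", accuracy)
```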

    On the Complexity of Compressing Obfuscation

    Get PDF
    Indistinguishability obfuscation has become one of the most exciting cryptographic primitives due to its far-reaching applications in cryptography and other fields. However, to date, obtaining a plausibly secure construction has been an elusive task, thus motivating the study of seemingly weaker primitives that imply it, with the possibility that they will be easier to construct. In this work, we provide a systematic study of compressing obfuscation, one of the most natural and simplest-to-describe primitives that is known to imply indistinguishability obfuscation when combined with other standard assumptions. A compressing obfuscator is roughly an indistinguishability obfuscator that outputs just a slightly compressed encoding of the truth table. This generalizes notions introduced by Lin et al. (PKC 2016) and Bitansky et al. (TCC 2016) by allowing for a broader regime of parameters. We view compressing obfuscation as an independent cryptographic primitive and show various positive and negative results concerning its power and plausibility of existence, demonstrating significant differences from full-fledged indistinguishability obfuscation. First, we show that as a cryptographic building block, compressing obfuscation is weak. In particular, when combined with one-way functions, it cannot be used (in a black-box way) to achieve public-key encryption, even under (sub-)exponential security assumptions. This is in sharp contrast to indistinguishability obfuscation, which together with one-way functions implies almost all cryptographic primitives. Second, we show that to construct compressing obfuscation with perfect correctness, one only needs to assume its existence with a very weak correctness guarantee and polynomial hardness. Namely, we show a correctness amplification transformation with optimal parameters that relies only on polynomial hardness assumptions. This implies a universal construction assuming only polynomially secure compressing obfuscation with approximate correctness. In the context of indistinguishability obfuscation, we know how to achieve such a result only under sub-exponential security assumptions together with derandomization assumptions. Lastly, we characterize the existence of compressing obfuscation with statistical security. We show that in some range of parameters and for some classes of circuits such an obfuscator exists, whereas it is unlikely to exist with better parameters or for larger classes of circuits. These positive and negative results reveal a deep connection between compressing obfuscation and various concepts in complexity theory and learning theory

    Matériaux : science et industrie [Materials: Science and Industry], Paris, 9-11 January 1989

    No full text
    SIGLE. Available at INIST (FR), Document Supply Service, under shelf number: RP 400 (1536) / INIST-CNRS - Institut de l'Information Scientifique et Technique. France

    On the Limit of Some Algorithmic Approach to Circuit Lower Bounds

    No full text